fix(openaiShim): unconditionally set reasoning_content for DeepSeek V4 thinking mode #918
skeeminator wants to merge 1 commit into Gitlawb:main from
Conversation
DeepSeek V4 (and similar reasoning providers) require the `reasoning_content` field to be present on every assistant message when thinking mode is enabled, even as an empty string. The previous guard only set it when the thinking text was non-empty (`typeof === 'string' && trim().length > 0`), causing 400 errors on multi-turn tool-call rounds.

Three sites fixed in `convertMessages()`:
- Assistant messages with thinking blocks: now always set `reasoning_content` (falling back to an empty string when thinking text is absent)
- Coalescing synthetic interrupt message: added `reasoning_content: ''`
- String-content fallback branch: added `reasoning_content: ''` when `preserveReasoningContent` is true
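A minimal sketch of the fixed logic, covering the thinking-block and string-content sites (the coalesced synthetic interrupt follows the same pattern). The types and the `convertMessages()` signature here are simplified assumptions, not the shim's real shapes:

```typescript
// Sketch of the fixed guard in convertMessages(); shapes are assumptions.
type ContentBlock = { type: "text" | "thinking"; text?: string; thinking?: string };
type InMessage = { role: "user" | "assistant"; content: string | ContentBlock[] };
type OutMessage = { role: string; content: string; reasoning_content?: string };

function convertMessages(
  messages: InMessage[],
  preserveReasoningContent: boolean,
): OutMessage[] {
  return messages.map((msg): OutMessage => {
    if (msg.role !== "assistant") {
      return {
        role: msg.role,
        content: typeof msg.content === "string" ? msg.content : "",
      };
    }
    if (typeof msg.content === "string") {
      // String-content fallback branch: the field is now emitted here too.
      const out: OutMessage = { role: "assistant", content: msg.content };
      if (preserveReasoningContent) out.reasoning_content = "";
      return out;
    }
    const thinkingText = msg.content.find((b) => b.type === "thinking")?.thinking;
    const text = msg.content
      .filter((b) => b.type === "text")
      .map((b) => b.text ?? "")
      .join("");
    const out: OutMessage = { role: "assistant", content: text };
    if (preserveReasoningContent) {
      // Old guard only set the field when thinkingText was non-empty;
      // now it is always set, falling back to "".
      out.reasoning_content =
        typeof thinkingText === "string" && thinkingText.trim().length > 0
          ? thinkingText
          : "";
    }
    return out;
  });
}
```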
Will be impacted by #910

Let's wait for #910.
gnanam1990 left a comment
Tight, surgical fix. Gating behind the existing preserveReasoningContent flag keeps non-DeepSeek providers unaffected, and the three-site coverage in convertMessages matches the failure modes described. Clear repro against api.deepseek.com is appreciated. LGTM.
Optional follow-up: a unit test for the preserveReasoningContent && !thinkingText path would harden this against future regressions.
Omgosh, DeepSeek is unusable right now in OpenClaude because of this :( Will keep following until there is a merge! Thanks!
#910 merged, please rebase and fix conflicts.

Hello @skeeminator, kindly rebase and fix conflicts; this is good to merge.
Vasanthdev2004 left a comment
Targeted maintainer triage review of the current head ($short).
Verdict: Needs changes
Blocking issue:
- GitHub reports this branch as DIRTY / conflicting with main, so it cannot be merged or final-approved as-is. Please rebase or merge latest main, resolve the conflicts, and rerun the relevant checks.
I did not do a full code review because the current branch state is not mergeable. Happy to re-review once the branch is clean.
Summary
- DeepSeek V4 thinking mode requires `reasoning_content` to be present on every assistant message in the request history when thinking mode is active. The API accepts an empty string; the field just needs to exist.
- The previous guard only set `reasoning_content` when `thinkingText.trim().length > 0`, so tool-call turns and edge cases (synthetic interrupts, non-array content) sent assistant messages without the field → 400.
- The fix always sets `reasoning_content` when `preserveReasoningContent` is true, falling back to `""` when thinking text is missing. Two additional sites (coalescing interrupt, string-content else branch) now also emit the field.
Impact
- Fixes `API Error: 400 {"error":{"message":"The reasoning_content in the thinking mode must be passed back to the API."}}` for DeepSeek V4 models (`deepseek-v4-pro`, `deepseek-v4-flash`) against `api.deepseek.com`. Tool-call rounds and plain multi-turn now work.
- Gated behind the existing `preserveReasoningContent` flag. Non-DeepSeek providers (OpenAI, Gemini, Mistral, etc.) are unaffected.
Testing
- `bun run build`
- `bun run smoke`
- `bun test src/services/api/openaiShim.test.ts`
- `deepseek-v4-pro` against `api.deepseek.com`: multi-turn tool-call round completed without 400
Notes
api.deepseek.com) with deepseek-v4-pro